New Computer Vision Model Released

I don’t believe there is an iNat-specific vision challenge at FGVC this year, but there are a handful of related challenges, from clade-specific challenges like snakes and fungi to camera trap recognition to a geo-spatial challenge:
https://sites.google.com/view/fgvc9/competitions
I am particularly excited to learn what comes out of the GeoLifeCLEF2022 challenge, since I believe it has the most potential for a groundbreaking improvement in species classification at the scale of iNaturalist.

As for the timeline, CVPR is in June this year and the contests typically wrap up a few weeks before, so late May sounds about right to me. I think each challenge runs on its own schedule, since they each have different organizers.

3 Likes

I am beginning to feel like a bot, automatically replying every time Computer Vision gets a new thread, but anyway - here I go again:

Has it been considered (would it even be possible?) to eliminate from the CV suggestion pool those taxa which have proven to be notoriously misidentified?
Because no matter how many more taxa are included in future training runs, there will always be examples where a large share of the resulting IDs will be incorrect.

Some examples:

  • Scaptomyza suggested for all kinds of different fly families
  • Succinea putris (usually only an ID at family or genus level is possible)
  • Mosses in general, which are awash with wrong species-level IDs based on CV suggestions

So I am proposing a reflective learning algorithm that evaluates its own suggestions by checking for subsequent (dis)agreements with the initial CV-based ID.
The idea is to include this in the regular training rounds: if a certain disagreement threshold is met, the CV becomes more cautious and restricts its suggestions to a higher taxonomic level.

I would like to hear from the developers whether this is possible (in the future) at all - and then I might remain quiet :slightly_smiling_face:

13 Likes

Sounds like the developers might have their hands full chasing existing leads and challenges. But perhaps you could prove it out yourself and share the code on GitHub?

1 Like

A more complicated algorithm would be cool, but most mosses are IDable by CV: many already have high rates of correct IDs, and the others just need more correct IDs (the moss problem stems from human error). Snails too - I get IDs from experts on this particular species, so they are definitely IDable from photos.

3 Likes

I have noticed that crayfish are particularly difficult for the CV to ID - even ones that are easily identified by an expert. Interestingly, for the Parkhill Prairie crayfish, which has just been added to the model (https://www.inaturalist.org/observations/identify?reviewed=any&quality_grade=needs_id%2Cresearch&place_id=1&verifiable=true&taxon_id=110538), it gets most of the recent observations (added after the model was trained?) wrong, but the older ones seem to be mostly correctly identified. What was the date cutoff for inclusion in the latest model?

1 Like

I’m not sure I understand the algorithm that you’re proposing, but if you can share pseudocode or a proof of concept, I’d love to read more.

Also, I mentioned in the blog post linked above that we plan to continue looking at new approaches to including or excluding nodes in the taxonomy. So we're on the same page: we see a problem here and we want to tackle it. I'm really focused on two other projects right now (speeding up training so we can release models more often, and new approaches to geofrequency), but once I have some time, I want to do some evaluation in this area.

I hope you can understand that we're trying to balance competing priorities here: we're excited to have more and more taxa represented in the models and we would like to see predictions made for those taxa, but we're also disappointed when the model makes mistakes on taxa it knows, and we wish it were more conservative in those cases.

In the past, Cassi made this great thread: https://forum.inaturalist.org/t/computer-vision-clean-up-archive/7281 - something like it would be a useful starting point for finding taxa the community thinks the vision model struggles with.

7 Likes

We made the export in early November.

1 Like

Dear Alex, thanks for the reply.
Please note that my suggestion wasn’t meant as

‘I want this, so please make it happen’

but rather as a question - an attempt to understand whether this is possible at all.

Here’s what I pictured:

a.) Count all observations of taxon xyz where the CV suggestion was chosen as the initial ID (indicated by the small CV icon next to the ID)
b.) Count the subset of those observations where another user subsequently disagreed that it is taxon xyz
c.) Define a threshold [= b) / a), as a percentage] → if it is met, the CV no longer includes this taxon in its suggestions
d.) Include geography in the above model (as with the ‘seen nearby’ suggestions)


I haven’t written a line of code in my life, so I cannot provide a technical concept. The core idea is that if it can be tracked how often a taxon ID suggested by the CV receives subsequent disagreements, this could become an automated learning process without the need for a curated ‘problematic species list’ (I contributed to Cassi’s thread myself, both adding species to the list and removing others which were successfully cleaned up). In other words, the activity of IDers would be combined with the visual identification model.

Take for example the snail Succinea putris (see also the flag and comments)

Thanks to massive clean-up efforts, there are now almost no observations of this species in America, but a short time ago there were more than 1,000 - with a constant influx of new ones, thanks to AI suggestions.
The CV probably learned from European RG observations, as the species likely does not occur in North America at all.

I imagine the CV could be trained this way:
At one cut-off date, in ‘continent: North America’ there were 8,346 initial ‘taxon: Succinea putris’ IDs where the observer chose the AI suggestion, and 8,344 of those have received subsequent disagreements. That is higher than the threshold of xy%, so the AI will not suggest S. putris for North America (but might still do so for Europe, or again in a future training round).
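(In this example the disagreement rate would be 8,344 / 8,346 ≈ 99.98% - far above any plausible threshold.)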

I myself helped get the flesh fly Sarcophaga carnaria out of the CV pool by reducing its observations to well below 100 - now there are almost no new observations at species level (the genus can generally only be IDed by genitalia).
With the amber snail, however, that approach is not possible due to the many RG observations in Europe, where they are probably correct. And since the almost identical-looking set of American amber snail species probably requires internal genitalia for reliable IDs, it is unlikely that there will ever be enough observations to teach the AI alternative suggestions.

With the ‘self-critical’ AI model suggested here, such situations seem more manageable.
This is not a feature request, as it is still a very rough draft, but maybe it is worth pursuing? A proof of concept would of course be needed.

12 Likes

This is explained well, and it’s something I’ve thought about; I think I’ve seen it mentioned by others on the forum as well.

Based on my understanding of how IDs work, this step might instead be based on the community taxon, e.g.:

  • count the number of observations where the community taxon is not the same as that initial ID

In both of the species examples above, the community taxon would end up at a broader level (a higher-rank taxon) than the initially suggested species, due to the disagreeing IDs.
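A minimal sketch of what that check could look like, assuming hypothetical field names and a made-up threshold (I don’t know the actual schema):

```python
from collections import defaultdict

DISAGREEMENT_THRESHOLD = 0.5  # invented value; would need tuning

def taxa_to_exclude(observations, threshold=DISAGREEMENT_THRESHOLD):
    """Return taxa whose CV-chosen initial IDs are usually overturned.

    observations: iterable of dicts with (hypothetical) keys
      'initial_id'      - taxon the observer accepted from the CV suggestion
      'community_taxon' - taxon the observation settled on (may be broader)
    """
    chosen = defaultdict(int)      # step a): CV suggestion taken as initial ID
    overturned = defaultdict(int)  # step b): community taxon ended up different

    for obs in observations:
        taxon = obs["initial_id"]
        chosen[taxon] += 1
        if obs["community_taxon"] != taxon:
            overturned[taxon] += 1

    # step c): exclude taxa whose disagreement rate meets the threshold
    return {t for t, n in chosen.items() if overturned[t] / n >= threshold}
```

Step d) from the proposal (geography) could be handled by keying the counts on (taxon, region) pairs instead of the taxon alone.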

2 Likes

@alex when the list of suggestions comes up, I would like to be able to ‘thumbs-down’ whatever I can immediately see is wrong, and to feed that info back into the system.

Especially when iNat confidently says it is ‘pretty sure’ of the wrong species - and one way out of range, at that.

6 Likes

I second this - it would be great if the models could learn from feedback. At the moment, as I understand it, each model is based on a snapshot of identified images; there is no way to update a model iteratively with feedback, other than simply building a new model later from a new snapshot.
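To make that concrete, the kind of feedback record I have in mind would look something like this (purely illustrative - I have no idea what iNat’s internals actually look like). It couldn’t update the current model in place, but it could be exported alongside the next training snapshot:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SuggestionFeedback:
    """One 'thumbs-down' on a CV suggestion (hypothetical schema)."""
    observation_id: int
    suggested_taxon_id: int  # the taxon the CV offered
    user_id: int             # who flagged it (useful for resisting abuse)
    flagged_at: datetime
    out_of_range: bool       # flags like "way out of range", per the post above
```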

1 Like

That’s correct. Unfortunately, it’s easier said than done to implement something like this. One alarming concern is that ML systems built the way you describe are susceptible to sabotage by determined bad actors - Microsoft’s unfortunate Tay provides a helpful lesson here: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

7 Likes

Thanks for the reply, Alex! I can see that the challenges must be enormous. The Tay story is sobering, although I like to hope the iNat community has far fewer bad actors than Twitter. It sounds like this might be something for the long term, with little realistic possibility of building a “learning” model in the short term.

However, I liked the suggestion by @carnifex, and perhaps something along those lines could be implemented as a filter on CV suggestions.

1 Like

Yeah, we’re interested in this as well. As I mentioned above, once I wrap up my two current projects I plan to do some evaluation in this area.

4 Likes

My concern with doing it this way is that if the CV decides not to include a particular taxon in its suggestions, but the user accepts whatever its new guess is, that guess may now be even less likely to be correct than the original one would have been. And predictable mis-IDs are a big advantage for people trying to clean up a taxon to get new taxa into the CV. I am currently looking at a taxon that was previously not in the CV and was more or less 100% misidentified as another species, which made it easy to clean up last fall to get it into the new model. Spot-checking a few observations now, the CV’s first suggestion appears to have jumped to basically 100% accuracy at splitting the two species. I suspect the jump in accuracy would have been much less clean if the misidentifications had been less predictable.

1 Like

I’ve only said that I plan to do evaluation because I’m unsure what approach makes sense at this point. I think we have to account for things like the amount of training data (might more training data solve this?), accuracy at a higher rank if a taxon were excluded (would an intervention even make things better?), and whether it makes sense to train as usual and exclude taxa post hoc, or to exclude target taxa before training, etc.

However, examining what taxa are included in the model is a priority.

2 Likes

I’d second this. The huge majority of new gall taxa we’ve been able to push across the training threshold in the past two years have come not from people tagging their observations in gall projects etc., but simply from looking at the observations that had been placed in the few gall taxa the CV knew in the prior round. Having them grouped in only a few places has made it much easier to correct them manually, whereas a more cautious or ‘correct’ CV guess might have made them hard to find.

5 Likes

Makes sense, and the ducks are a good example. I’ve wondered whether there could be a good way to have, say, ‘genus y except species x’ as a leaf in the CV - i.e., for a genus where 99% of the observations are of one species and few or none of the other species have enough observations to be included on their own, but which in aggregate have enough to form a leaf.

I suppose that from an interface perspective it might still be simplest to display it as just ‘genus y’ in the suggestions, but perhaps if the CV gives it significantly higher weight than ‘species x’, it could suppress the species x suggestion or penalize it in the ranking.
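A rough sketch of how such complement leaves might be chosen - all thresholds and names are invented for illustration, and iNat’s real inclusion rules surely differ:

```python
MIN_LEAF_OBS = 100     # assumed per-leaf training minimum
DOMINANT_SHARE = 0.99  # genus dominated by a single species

def complement_leaf(genus, species_counts, min_obs=MIN_LEAF_OBS):
    """Return a 'genus y except species x' pseudo-leaf, if warranted.

    species_counts: dict mapping species name -> observation count
    """
    total = sum(species_counts.values())
    dominant, dom_count = max(species_counts.items(), key=lambda kv: kv[1])
    rest = {sp: n for sp, n in species_counts.items() if sp != dominant}

    # One species dominates, the others are individually too small to be
    # leaves, but together they clear the training threshold.
    if (dom_count / total >= DOMINANT_SHARE
            and all(n < min_obs for n in rest.values())
            and sum(rest.values()) >= min_obs):
        return f"{genus} (except {dominant})"
    return None

# complement_leaf("Genus y", {"species x": 11000, "sp. a": 60, "sp. b": 50})
# -> "Genus y (except species x)"
```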

2 Likes

I would love for this to happen!

I’m confused by the “We are pretty sure it is X” followed by a top suggestion of Y, which is something quite different.

1 Like