New Computer Vision Model Released

We’ve just switched over from a 38,000-taxon model from July 2021 to a 55,000-taxon model. You can read more about the other differences on the iNaturalist blog. Thanks to all who contributed observations and identifications!

50 Likes

Hope it results in many easy-to-confirm IDs. Looks like 18% of species with at least one RG observation are in the model, based on the totals in Explore (disclaimer: not exact math, since some of the taxa in the model are at genus level or higher).
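
For what it's worth, the arithmetic behind that figure can be sanity-checked. A minimal sketch, where the Explore total is an assumed placeholder, not the actual number at the time of the release:

```python
# Back-of-envelope check of the 18% figure (assumed Explore total, not real data)
leaf_taxa_in_model = 55_000    # from the announcement above
species_with_rg_obs = 305_000  # hypothetical total from Explore
print(f"{leaf_taxa_in_model / species_with_rg_obs:.0%}")  # -> 18%
```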

1 Like

A question about the 55,000 number: does this count include taxa at the genus level and above? Or does the 55,000 refer only to how many leaf taxa (taxa without descendants) are in the model?

2 Likes

It’s 55k leaf taxa, i.e. the distinct labels the CV model is directly trained on.

5 Likes

Thank you! Do you happen to know off the top of your head how many total taxa are in the model when you include leaf + non-leaf taxa?

About 85k.

9 Likes

Big increase in included taxa!
Nice to have in place before the summer.
Great stuff.

I’d love to try to join in with the Kaggle competition.
Do you know if it will run at the same time as last year (end of May)?

2 Likes

I don’t believe there is an iNat-specific vision challenge at FGVC this year, but there are a handful of related challenges, from clade-specific challenges like snakes and fungi to camera trap recognition to a geo-spatial challenge:
https://sites.google.com/view/fgvc9/competitions
I am particularly excited to learn what comes out of the GeoLifeCLEF2022 challenge, since I believe it has the most potential for a groundbreaking improvement in species classification at the scale of iNaturalist.

As far as the timeline, CVPR is in June this year and typically the contests wrap up a few weeks before, so late May sounds about right to me. I think each challenge runs on its own schedule since they each have different organizers.

3 Likes

I am beginning to feel like a bot, automatically replying every time computer vision gets a new thread, but anyway, here I go again:

Has it been considered (would it even be possible?) to eliminate from the CV suggestion pool those taxa which have proven to be notoriously misidentified?
Because no matter how many more taxa are included in future training rounds, there will always be examples where a large share of those IDs are incorrect.

Some examples:

  • Scaptomyza suggested for all kinds of different fly families
  • Succinea putris (usually only an ID at family or genus level is possible)
  • mosses in general are spilling over with wrong species IDs based on CV

So I am proposing a reflective learning algorithm that evaluates its own suggestions by looking at subsequent (dis)agreements with the initial (CV) ID.
The idea is to include this in the regular training rounds; if a certain threshold is met, the CV becomes more careful and restricts suggestions to a higher taxonomic level.

I would like to hear from the developers whether this is possible at all (even in the future), and then I might remain quiet :slightly_smiling_face:

14 Likes

Sounds like the developers might have their hands full chasing existing leads and challenges. But perhaps you could prove it out yourself and share the code on GitHub?

1 Like

A more complicated algorithm would be cool, but most mosses are IDable by CV; many already have high rates of correct IDs, and the rest just need more correct IDs (the moss problem stems from human error). Snails too: I get IDs from experts on this particular species, so it is definitely IDable from photos.

3 Likes

I have noticed that crayfish are particularly difficult for the CV to ID. Even ones that are easily identified by an expert are apparently difficult for the CV. Interestingly, for the Parkhill Prairie crayfish, which has just been added to the model (https://www.inaturalist.org/observations/identify?reviewed=any&quality_grade=needs_id%2Cresearch&place_id=1&verifiable=true&taxon_id=110538), it gets most of the recent observations (added after the model export?) wrong, but the older ones seem to be mostly correctly identified. What was the date cutoff for inclusion in the latest model?

1 Like

I’m not sure I understand the algorithm that you’re proposing, but if you can share pseudocode or a proof of concept, I’d love to read more.

Also, I mentioned in the blog post linked above that we plan to continue looking at new approaches to including or excluding nodes in the taxonomy. So we’re on the same page: we see a problem here and we want to tackle it. I’m really focused on two other projects right now (speeding up training / releasing models more often, and new approaches to geofrequency), but once I have some time, I want to do some evaluation in this area.

I hope you can understand that we’re trying to balance a few competing priorities here: we are excited to have more and more taxa represented in the models and we would like to see predictions made for those taxa, but we’re also disappointed when the model makes mistakes on taxa that are in the model and we wish it was more conservative in those cases.

A while back, Cassi made this great thread (https://forum.inaturalist.org/t/computer-vision-clean-up-archive/7281), and something like it would be a useful starting point for finding taxa the community thinks the vision model struggles with.

7 Likes

We made the export in early November.

1 Like

Dear Alex, thanks for the reply.
Please note that my suggestion wasn’t meant as

‘I want this, so please make it happen’

but rather as a question and an attempt to understand whether this is possible at all.

Here’s what I pictured (a rough code sketch follows the list):

a.) Count all observations of taxon xyz where the CV suggestion was chosen as the initial ID (indicated by the CV icon next to the ID)
b.) Count the subset of the above observations where another user subsequently disagreed that this is taxon xyz
c.) Define a threshold [= % of b)/a)] → if met, the CV decides not to include this taxon in its suggestions
d.) Include geography in the above model (as with ‘seen nearby’ suggestions)
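
For concreteness, here is a minimal sketch of steps a) to d) in Python. Everything in it (field names, threshold value, minimum sample size) is an illustrative assumption, not iNaturalist's actual data model or pipeline:

```python
from collections import defaultdict

# Rough sketch of the proposed "self-critical" exclusion check.
THRESHOLD = 0.5   # fraction of disagreements that triggers exclusion (made up)
MIN_OBS = 100     # don't judge a taxon on a handful of observations (made up)

def taxa_to_exclude(observations, threshold=THRESHOLD, min_obs=MIN_OBS):
    """Return (taxon_id, place) pairs the CV should stop suggesting.

    observations: iterable of dicts with hypothetical keys
      'taxon_id', 'place', 'cv_initial' (initial ID came from the CV),
      'disagreed' (a later identifier disagreed with that ID).
    """
    totals = defaultdict(int)         # a) CV-initiated IDs per (taxon, place)
    disagreements = defaultdict(int)  # b) ...that later drew a disagreement

    for obs in observations:
        if not obs["cv_initial"]:
            continue
        key = (obs["taxon_id"], obs["place"])  # d) geography is part of the key
        totals[key] += 1
        if obs["disagreed"]:
            disagreements[key] += 1

    # c) exclude where the disagreement rate meets the threshold
    return {
        key
        for key, n in totals.items()
        if n >= min_obs and disagreements[key] / n >= threshold
    }
```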


I haven’t written a line of code in my life, so I cannot provide a technical concept. The core idea is that if it can be tracked how often a taxon ID suggested by the CV receives subsequent disagreements, this could become an automated learning process without the need for a curated ‘problematic species list’ (I contributed to Cassi’s thread myself, both adding species to the list and removing others which were successfully cleaned up). In other words, the activity of identifiers would be combined with the visual identification model.

Take, for example, the snail Succinea putris (see also the flag and comments):

Due to massive efforts, there are now almost no observations of this species in America, but a short time ago there were more than 1,000, with a constant influx of new ones thanks to AI suggestions.
The CV probably learned from European RG observations, as the species likely does not occur in NA at all.

I imagine the CV could be trained this way:
At one cut-off date, in the place ‘North America’ there were 8,346 initial ‘Succinea putris’ IDs where the observer chose the AI suggestion, and 8,344 of those received subsequent disagreements, which exceeds the threshold of xy% – so the AI will not suggest S. putris for North America (but might still do so for Europe, or again after a future learning round).
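
Plugging the numbers from this example into the sketch above, the check would be easy to trip:

```python
total_cv_ids = 8_346  # initial CV-chosen 'Succinea putris' IDs in North America
disagreed = 8_344     # of those, later received a disagreeing ID
print(f"{disagreed / total_cv_ids:.2%}")  # -> 99.98%, above any plausible threshold
```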

I myself helped get the flesh fly Sarcophaga carnaria out of the CV pool by reducing its observations to well below 100; now there are almost no new observations at species level (the genus can generally only be IDed by genitalia).
However, with the amber snail that approach would not be possible due to the many RG observations in Europe, where they are probably correct. Plus, for the almost identical-looking set of American amber snail species, internal genital structures are probably needed for reliable IDs, so it is unlikely that there will ever be enough observations to teach the AI alternative suggestions.

With the ‘self-critical’ AI model suggested here, such situations would seem more manageable.
This is not a feature request, as it is still a very rough draft, but maybe it is worth pursuing? A proof of concept would of course be needed.

14 Likes

This is explained well, and it’s something I’ve thought about; I think I’ve seen it mentioned by others on the forum as well.

Based on my understanding of how IDs work, this step might be based on the community taxon instead, e.g.:

  • count the number of observations where the community taxon is not the same as that initial ID

In both of the species examples above, the community taxon would end up at a broader level (a higher-rank taxon) than the initially suggested species, due to the disagreeing IDs.
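
A minimal sketch of that variant, with hypothetical field names again, just swaps the explicit-disagreement test for a community-taxon comparison:

```python
# Variant of step b): treat an observation as an effective disagreement whenever
# the community taxon settled on something other than the initial CV-suggested
# taxon. Field names are assumptions, not iNaturalist's schema.
def is_effective_disagreement(obs):
    return (
        obs["community_taxon_id"] is not None
        and obs["community_taxon_id"] != obs["initial_taxon_id"]
    )
```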

2 Likes

@alex, when the list of suggestions comes up, I would like to be able to ‘thumbs-down’ what I can immediately see is wrong, and to feed that info back into the system.

Especially when iNat confidently says it is ‘pretty sure’ of a species that is wrong, and way out of range at that.

7 Likes

I second this; it would be great if the models were able to learn from feedback. At the moment, as I understand it, each model is based on a snapshot of identified images, and there is no way to update a model iteratively with feedback, other than building a new model later from a new snapshot.

1 Like

That’s correct. Unfortunately, it’s easier said than done to implement something like this. One alarming concern is that ML systems built the way you describe are susceptible to sabotage by determined bad actors; Microsoft’s unfortunate Tay provides a helpful lesson here: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

7 Likes

Thanks for the reply, Alex! I can see that the challenges must be enormous. The Tay story is sobering, although I like to hope the iNat community has far fewer bad actors than Twitter. It sounds like this might be something for the long term, with little realistic possibility of building a “learning” model in the short term.

However, I liked the suggestion by @carnifex, and perhaps it would be possible to implement something of this sort as a filter on CV suggestions.

1 Like