Hi, I’m a big fan of the CV model that powers the species suggestions on iNat. I have always wanted to use the dataset that the iNat team open-sourced, but never had the time to officially take part in the Kaggle competition. I finally managed to get around to it, and I’ve written up a bit about what I learned here
I think folks interested in computer vision might find it useful (there is some technical jargon) to see what these models learn. To be clear, this is a personal project (and the model is mediocre at best, with ~80% accuracy). While the “real iNat model” would obviously be different, I think there could be some similarities in what these models learn.
Some patterns I noticed about the model:
The model does well on strikingly patterned groups like butterflies, sea snails, and beetles.
The model struggles on groups that are less sharply defined visually. A long shot of pine trees or a perched cormorant does not provide enough clues. All rabbits (lagomorphs) are brown furry balls.
I am also running a web app that lets you peek at the model’s predictions.
This is the link. I am hosting this on a “free tier” GCP VM, so I will turn it off at some point next month. Some rough tips on using the app are here.
A big thank you to the team at iNat and the great community! iNat truly makes my life better!
Thanks for reading!
Edit - It looks like the web app shut down some time back. I have restarted it and updated the links.
Just in case, this is the address: http://35.224.94.168:8080/ (the IP address should not change).
Thank you for sharing. If you are planning on publishing your pipeline, please consider including a requirements.txt or, even better, a conda environment.yml file for people in the future who want to get your pipeline running on their machines. Including a step in the training pipeline that downloads and extracts the dataset is also very helpful.
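For anyone who hasn’t written one before, a minimal environment.yml looks something like this (the package names and versions here are placeholders for illustration, not the actual pipeline’s dependencies):

```yaml
# Illustrative only -- pin whatever packages/versions the pipeline actually uses.
name: inat-pipeline
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.10
  - pytorch
  - torchvision
  - numpy
  - pandas
  - pip
  - pip:
      - timm   # hypothetical extra pip dependency
```

Anyone can then recreate the environment with `conda env create -f environment.yml`.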
Thank you for mentioning Russulas! I would expect the fungi to be among the groups with the worst accuracy ratings. It would be fantastic if you could do a little exploration of fungi (and maybe even slime moulds). Fungi and slimes get confused for each other all the time, as well as with corals, leaves, insect homes made of mud, and all sorts of other things.
Thank you for the kind words! Right now, I don’t think there’s anything truly novel enough for a publication (at least in CV). I do want to explore a loss function that’s more aligned with the taxonomy, and if those results look nice, it might be worth taking another look!
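For the curious: one common way to make a loss taxonomy-aware is to add a genus-level cross-entropy term on top of the species-level one, where a genus’s probability is the sum of the probabilities of its species. A toy sketch with a made-up 4-species / 2-genus taxonomy (not my actual implementation):

```python
import numpy as np

# Hypothetical taxonomy: species 0,1 belong to genus 0; species 2,3 to genus 1.
species_to_genus = np.array([0, 0, 1, 1])

def hierarchical_loss(logits, species_label, alpha=0.5):
    """Species-level cross-entropy plus a genus-level cross-entropy term,
    where a genus's probability is the summed probability of its species."""
    p = np.exp(logits - logits.max())
    p = p / p.sum()
    species_ce = -np.log(p[species_label])
    genus_p = np.array([p[species_to_genus == g].sum() for g in range(2)])
    genus_ce = -np.log(genus_p[species_to_genus[species_label]])
    return species_ce + alpha * genus_ce

# Confusing a sibling species (same genus) is penalized less than
# confusing a species from a different genus:
logits_same_genus  = np.array([1.0, 3.0, 0.0, 0.0])  # mass on a sibling species
logits_other_genus = np.array([1.0, 0.0, 3.0, 0.0])  # mass on another genus
print(hierarchical_loss(logits_same_genus, 0) < hierarchical_loss(logits_other_genus, 0))  # True
```

The idea is simply that a mistake within the correct genus should cost less than a mistake across genera.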
Yes, I too thought the model would struggle with Fungi. However, the overall accuracy for Fungi as a whole seems to be about average, at 77%. Note, though, that there are only 3.4K (out of 100K) images for fungi in the test dataset. I’m not sure what the overall distribution for fungi is on iNat, but I suspect that fungi have a much lower proportion of “research grade” observations, which goes back to fungi being difficult to ID from the image alone. The images that made it into the training/test set are probably clear enough to reach “research grade”, which probably shows the model in a better light.
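For context, the per-group numbers above are just top-1 accuracy sliced by group. A toy sketch of that bookkeeping (the group names and hit/miss flags below are made up, not real results):

```python
import numpy as np

# Toy per-image records: coarse group of each test image, and whether
# the model's top-1 prediction matched the label (1 = correct).
groups  = np.array(["Fungi", "Fungi", "Fungi", "Aves", "Aves"])
correct = np.array([1, 1, 0, 1, 0])

def accuracy_by_group(groups, correct):
    """Top-1 accuracy and sample count for each coarse group."""
    return {g: (correct[groups == g].mean(), int((groups == g).sum()))
            for g in np.unique(groups)}

print(accuracy_by_group(groups, correct))
```

The small sample count per group (like the 3.4K fungi images) is exactly why the per-group numbers should be read with some caution.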
Thank you for your write-up. I also play around with some computer vision models, but I lack a lot of background, so this was a really good read.
As for your flower/insect problem, I hope that in the far, far future iNat will move to object detection algorithms, which would allow multiple species in one image. Of course, this would also mean a lot of changes to the platform logic itself.
Hmm, I’m wondering about the quality of the datasets for fungi, even when they include only research grade observations.
The “Fungi” dataset samples presented above show four lichens and one actual fungus that is not a lichen. I’d be curious to know what the fungi dataset would look like if it were free of lichens.
I have also personally seen a lot of fungal observations “identified” to species by overzealous people who don’t realize that particular genera are impossible to ID to the species level without microscopic examination. A lot of common species names are also applied by people just getting into mycology who don’t know better (e.g. lots of shelf fungi identified as chicken of the woods, turkey tail, or dryad’s saddle when they’re really something else).
There are at least two images shown in the examples of Pezizales above that look like Helvella species, which are in the Pezizales but are not at all typical of the group. There are a ton of species in the Pezizales, but not all of them are easy to distinguish from each other, especially at the genus level or lower. I would expect that a lot of observations of Pezizales are labelled as the genus Peziza when they’re actually Pezizales but in a different genus.
Anyway, just some thoughts on this – I love iNat and hope to contribute to help improve it.
For sure. The number of blurry images of fungi (which are stationary…) or of fungi rotten beyond the point of no return in iNat observations is impressive, as is the number of photos of fungi taken from above and without enough contextual information to identify them. (When a fungus is growing on another organism, knowing what that other organism is is often crucial to identifying it properly. Also, many fungi are so morphologically similar to each other that a top-down view alone is not helpful; it needs to be accompanied by images of the side of the fungus, showing the stipe (if any) and its colour, texture, shape, size, and attachment to the cap and substrate, the volva (if any), and the ring and/or veil/webbing (if any), not to mention the underside of the fungus and the shape, colour, attachment, etc. of the gills, pores, or other spore-bearing structures.) Fungi can also look very different from each other depending on how wet conditions were when they grew, how dehydrated they are, how old they are, and many other factors.
I am an obsessive photographer of fungi and I am good about taking quality macros of informative features, but 94% of my 2,509 observations of fungi (!) are not research grade because a) I don’t know off the top of my head what taxon they’re in, especially with the name changes since I last took formal courses in mycology, b) iNaturalist’s AI is no help with them/presents obviously incorrect options, and c) I haven’t had time to look up the observations in reliable field guides (books and online). Over time, I plan on going through my past observations and identifying them to the genus level or lower, if possible. In the meantime, if my observations are at all representative, the fungal datasets have a long way to go.
This was super interesting and helpful for identifying what kinds of images cause problems. Arthropods on a plant seemed to be a sticking point. I wish the computer vision would show how confident it is in each identification in the app. Maybe a future feature?
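For what it’s worth, the raw scores already contain a confidence signal: softmax turns them into probabilities that a UI could display next to each suggestion. A minimal sketch (the candidate species and scores below are made up):

```python
import numpy as np

def top_k_with_confidence(logits, labels, k=3):
    """Softmax the raw scores and return the top-k (label, probability) pairs."""
    p = np.exp(logits - logits.max())
    p = p / p.sum()
    order = np.argsort(p)[::-1][:k]
    return [(labels[i], float(p[i])) for i in order]

# Made-up candidate list and raw model scores.
labels = ["Danaus plexippus", "Limenitis archippus", "Apis mellifera"]
print(top_k_with_confidence(np.array([4.0, 3.2, 0.5]), labels, k=2))
```

One caveat: raw softmax probabilities are often over-confident, so they would likely need calibration (e.g. temperature scaling) before being shown to users.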
Thank you for pointing this out. Now I’m curious about how lichens get categorized. My understanding is that lichens are a symbiotic relationship between algae and fungi, so it’s a tricky one, I guess!
Like you said, getting a well-curated dataset for mushrooms is a real challenge.
I came across this dataset of 1,500 species of Danish fungi (apparently validated by experts), with very rich metadata like substrate and habitat. It looks like people have applied computer-vision-based classification approaches to it reasonably well, but I wonder whether these species also occur in other geographies.
Yes, lichens are fungi + something photosynthetic (algae, sensu lato, or blue-green bacteria). But lichens behave differently than fungi without the photosynthetic partner, and are kind of their own thing from an ecological perspective.
That Danish dataset looks interesting. Considering that a fair number of fungi present in Eurasia are also present in North America, it should be informative at a larger scale than just Denmark.
Hi @tragopan. Thanks a lot for your effort. This looks really promising. I’m working on classification and identification as well right now, so I’m very interested in your findings and would love to see the implementation if possible. Grad-CAM especially is on my list. Awesome work!
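In case it helps, the mechanics of Grad-CAM fit in a few lines of PyTorch: hook a conv layer, weight its feature maps by their average gradients, and ReLU the sum. The tiny CNN below is a randomly initialized stand-in (not the model from this project), so the heatmap itself is meaningless; the hooks and weighting are the point:

```python
import torch
import torch.nn.functional as F

# Tiny stand-in CNN with random weights -- a real setup would hook the
# last conv layer of a trained backbone such as a ResNet.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 5),  # 5 hypothetical classes
)

feats, grads = {}, {}
target_layer = model[0]  # the conv layer whose activations we visualize

target_layer.register_forward_hook(lambda m, i, o: feats.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 32, 32)          # stand-in for a preprocessed image
logits = model(x)
logits[0, logits.argmax()].backward()  # backprop from the predicted class

# Grad-CAM: weight each feature map by its average gradient, ReLU, normalize.
w = grads["v"].mean(dim=(2, 3), keepdim=True)          # (1, C, 1, 1)
cam = F.relu((w * feats["v"]).sum(dim=1)).squeeze(0)   # (H, W) heatmap
cam = cam / (cam.max() + 1e-8)                         # scale into [0, 1]
print(cam.shape)  # torch.Size([32, 32])
```

In practice the heatmap is upsampled to the input resolution and overlaid on the photo to show which regions drove the prediction.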