I recently gave a talk on data quality on iNaturalist at the Southern California Botanists 2019 symposium, and I figured some of the slides and findings I summarized would be interesting to everyone, so here goes.
Accuracy of Identifications in Research Grade Observations
Some of you may recall we performed a relatively ad hoc experiment to determine how accurate identifications really are. Scott posted some of his findings from that experiment in blog posts (here and here), but I wanted to summarize them for myself, with a focus on how accurate "RG" observations are, which here I'm defining as obs that had a species-level Community Taxon when the expert encountered them. Here's my slide summarizing the experiment:
And yes, https://github.com/kueda/inaturalist-identification-quality-experiment/blob/master/identification-quality-experiment.ipynb does contain my code and data in case anyone wants to check my work or ask more questions of this dataset.
So again, looking only at expert identifications where the observation already had a community opinion about a species-level taxon, here's how accuracy breaks down for everything and by iconic taxon:
Some definitions
- accurate: identifications where the taxon the expert suggested was the same as the existing observation taxon or a descendant of it
- inaccurate: identifications where the taxon the expert suggested was not the same as the existing observation taxon and was also not a descendant or ancestor of that taxon
- too specific: identifications where the taxon the expert suggested was an ancestor of the observation taxon
- imprecise: identifications where the taxon the expert suggested was a descendant of the observation taxon
Close readers may already notice a problem here: my filter for "RG" observations is based on whether or not we think the observation had a Community Taxon at species level at the time of the identifications, while my definitions of accuracy are based on the observation taxon. Unfortunately, while we do record what the observation taxon was at the time an identification gets added, we don't record what the community taxon was, so we can't really differentiate between RG obs and obs that would be RG if the observer hadn't opted out of the Community Taxon. I'm assuming those cases are relatively rare in this analysis.
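To make the comparison concrete, here's a minimal sketch of how I think about it, assuming taxon records with an id and an ancestor_ids list (roughly how the iNaturalist API represents taxa). This is illustrative only, not the code from the notebook, and it treats an expert ID below the observation taxon as accurate-but-imprecise per the definitions above:

```python
# Illustrative only: categorize an expert ID against the observation taxon
# using ancestry. Assumes taxon dicts like {"id": 123, "ancestor_ids": [...]}.

def ancestor_ids(taxon):
    """IDs of the taxon's ancestors, excluding the taxon itself."""
    return set(taxon.get("ancestor_ids", [])) - {taxon["id"]}

def categorize(expert_taxon, observation_taxon):
    if expert_taxon["id"] == observation_taxon["id"]:
        return "accurate"
    if observation_taxon["id"] in ancestor_ids(expert_taxon):
        # The expert refined the ID to a descendant: the observation taxon
        # was accurate, though imprecise.
        return "accurate (imprecise)"
    if expert_taxon["id"] in ancestor_ids(observation_taxon):
        # The expert could only confirm a coarser taxon: the observation
        # taxon was too specific.
        return "too specific"
    return "inaccurate"
```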
Anyway, my main conclusions here are that
- about 85% of Research Grade observations were accurately identified in this experiment
- accuracy varies considerably by taxon, from 91% accurate in birds to 65% accurate in insects
In addition to the issues I already raised, there were some serious problems here:
Since I was presenting to a bunch of Southern California botanists, I figured I'd try repeating the analysis assuming some folks in the audience were infallible experts, so I exported identifications by jrebman, naomibot, and keirmorse (all SoCal botanists I trust) and made the same chart:
jrebman has WAY more IDs in this dataset than either of the other two botanists, and he's added way more identifications than were present in the 2017 Identification Quality Experiment. I'm not sure if he's infallible, but he's a well-established systematic botanist at the San Diego Natural History Museum, so he's probably as close to an infallible identifier as we can get.
Anyway, note that we're a good 8-9 percentage points more accurate here. Maybe this is due to a bigger sample; maybe it's due to Jon's relatively unbiased approach to identifying (he's not looking for Needs ID records or incorrectly identified records, he just IDs all plants within his regions of interest, namely San Diego County and the Baja peninsula); maybe this pool of observations has more accurate identifiers than observations as a whole; maybe people are more interested in observing easy-to-identify plants in this set of parameters (doubtful). Either way, I find it interesting.
That's it for identification accuracy. If you know of papers on this or other analyses, please include links in the comments!
Accuracy of Automated Suggestions
I also wanted to address what we know about how accurate our automated suggestions are (aka vision results, aka "the AI"). First, it helps to know some basics about where these suggestions come from. Here's a schematic:
The model is a statistical model that accepts a photo as input and outputs a ranked list of iNaturalist taxa. We train the model on photos and taxa from iNaturalist observations, so the way it ranks that list of output taxa is based on what it's learned about what visual attributes are present in images labeled as different taxa. That's a gross over-simplification, of course, but hopefully adequate for now.
The suggestions you see, however, are actually a combination of vision model results and nearby observation frequencies. To get those nearby observations, we try to find a common ancestor among the top N model results (N varies with each new model, but in this figure N = 3). Then we look up observations of that common ancestor within 100 km of the photo being tested. If those nearby observations include taxa that weren't in the vision results, we inject them into the final results. We also re-order suggestions based on their taxon frequencies.
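Here's a toy sketch of those two steps (finding a common ancestor among the top results, then injecting and re-ranking by nearby frequencies). The data structures and the weighting scheme are made up for illustration; the real service is more involved:

```python
# Toy versions of the two steps described above. Ancestries, scores, and
# the frequency weighting are all illustrative, not iNaturalist's code.

def common_ancestor(top_results, ancestry):
    """Deepest taxon shared by the top results.
    top_results: list of (taxon_id, vision_score), best first.
    ancestry: taxon_id -> list of ancestor IDs ordered root-first."""
    chains = [ancestry[taxon_id] + [taxon_id] for taxon_id, _ in top_results]
    shared = None
    for level in zip(*chains):  # walk down from the root
        if len(set(level)) == 1:
            shared = level[0]
        else:
            break
    return shared

def combine_with_nearby(vision_results, nearby_counts):
    """vision_results: list of (taxon_id, vision_score), best first.
    nearby_counts: taxon_id -> count of nearby observations of taxa under
    the common ancestor (e.g. within ~100 km of the photo)."""
    scores = dict(vision_results)
    # Inject nearby taxa the vision model didn't suggest at all.
    for taxon_id in nearby_counts:
        scores.setdefault(taxon_id, 0.0)
    # Re-order by boosting taxa that are frequently observed nearby
    # (a simple additive blend here, purely for illustration).
    total_nearby = sum(nearby_counts.values()) or 1
    def ranking_key(taxon_id):
        frequency = nearby_counts.get(taxon_id, 0) / total_nearby
        return scores[taxon_id] + frequency
    return sorted(scores, key=ranking_key, reverse=True)
```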
So with that summary in mind, here's some data on how accurate we think different parts of this process are.
Model Accuracy (Vision only)
There are a lot of ways to test this, but here we're using, as inputs, photos of taxa the model trained on that were exported at the time of training but not included in the training itself, and "accuracy" is how often the model recommends the right taxon for those photos as its top result. We've broken that down by iconic taxon and by number of training images. I believe the actual data points here are taxa and not photos, but Alex can correct me on that if I'm wrong.
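In other words, this is per-taxon top-1 accuracy on held-out photos. Here's a minimal sketch of that metric, assuming a hypothetical model.predict that returns a ranked list of taxon IDs (this is not the real training or evaluation code):

```python
# Sketch of per-taxon top-1 accuracy on held-out photos. The model object
# and its predict method are hypothetical stand-ins.

from collections import defaultdict

def top1_accuracy_by_taxon(model, held_out_photos):
    """held_out_photos: iterable of (image, true_taxon_id) pairs the model
    did not train on. Returns {taxon_id: top-1 accuracy}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, true_taxon_id in held_out_photos:
        ranked_taxa = model.predict(image)  # ranked list of taxon IDs, best first
        totals[true_taxon_id] += 1
        hits[true_taxon_id] += int(ranked_taxa[0] == true_taxon_id)
    # One accuracy value per taxon, matching the per-taxon data points in the chart.
    return {taxon_id: hits[taxon_id] / totals[taxon_id] for taxon_id in totals}
```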
So the main conclusions here are
- Median accuracy is between 70 and 85% for taxa the model knows about
- Accuracy varies widely within iconic taxa, and somewhat between iconic taxa
- Number of training images makes a difference (generally more the better, with diminishing returns)
Overall Accuracy (Vision + Nearby Obs)
This chart takes some time to understand, but it's the results of tests we perform on the whole system, varying by the method of defining accuracy (top1, top10, etc.) and by the common ancestor calculation parameters (which top YY results we look at when determining a common ancestor, and what combined vision score threshold we accept for a common ancestor).
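For what it's worth, here's an illustrative sketch of the two things being scored in that grid: top-N accuracy of the final suggestions, and the accuracy of the common ancestor when one is shown. The shape of the result records and their field names are assumptions for this sketch, not the actual test harness:

```python
# Illustrative scoring of the evaluation grid described above. The shape
# of `results` and its field names are assumptions, not iNaturalist's code.

def evaluate(results, n):
    """results: list of dicts like
      {"suggestions": [taxon_id, ...],       # final ranked suggestions
       "common_ancestor": taxon_id or None,  # shown only when it clears the score threshold
       "true_taxon": taxon_id,
       "true_ancestors": {taxon_id, ...}}    # ancestors of the true taxon
    Returns (top_n_accuracy, common_ancestor_accuracy)."""
    top_n_accuracy = sum(
        1 for r in results if r["true_taxon"] in r["suggestions"][:n]
    ) / len(results)

    shown = [r for r in results if r["common_ancestor"] is not None]
    common_ancestor_accuracy = sum(
        1 for r in shown
        if r["common_ancestor"] in r["true_ancestors"] | {r["true_taxon"]}
    ) / max(len(shown), 1)

    return top_n_accuracy, common_ancestor_accuracy
```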
My main conclusions here are
- The common ancestor, i.e. what you see as "We're pretty sure it's in this genus," is very accurate, like in the 95% range
- Top1 accuracy is only about 64% when we include taxa the model doesn't know about. That surprised me because anecdotally it seems higher, but keep in mind this test set includes photos of taxa the model doesn't know about (i.e. it cannot recommend the right taxon for those photos), and I'm biased toward seeing common stuff the model knows about in California
- Nearby observation injection helps a lot, like 10 percentage points in general
Conclusions
- Accuracy is complicated and difficult to measure
- What little we know suggests iNat RG observations are correctly identified at least 85% of the time
- Vision suggestions are 60-80% accurate, depending on how you define "accurate," but more like 95% if you only accept the "we're pretty sure" suggestions
Hope that was interesting! Another conclusion was that I'm a crappy data scientist and I need to get more practice using IPython notebooks and the whole Python data science stack.