The problem with blindly using biodiversity databases

All data need to be reconciled; you cannot blindly trust any data, not even, for instance, on-line measurements in industry. Data reconciliation is a must before doing any evaluation. How can you trust scientific papers with faunistic data? Are they better than iNaturalist just because they are written by scientists (some of whom spend less time in the field than hobbyists)? I do not think so. Here, at least, you can always check doubtful data and see the photo(s); you do not have that luxury with most published data. It would be easy to just take thousands of hours of work by freelancers and plot the data, but some work also needs to be done by those who want to publish it ;-)

1 Like

Wasn’t it tested somewhere that the iNat error rate is no bigger than in museums?

2 Likes

No idea, was it? For which taxon groups?

So long as marking an observation as ‘not wild’ tips it out of Needs ID, many people are reluctant to do so.
iNat needs a graceful way to split ‘Is it wild?’ from ‘Needs ID’, paired with an easy way for power users to avoid casual observations (which they can’t do now, because the observation isn’t marked as casual, because we still want an ID). Vicious circle.

3 Likes

I don’t disagree. That doesn’t absolve the users of data from understanding the nature of 3rd party information and curating it properly before using it in (meta)analyses.

4 Likes

It was, but I can’t find it again. @tiwane?

2 Likes

Maybe referring to this? https://www.inaturalist.org/journal/loarie/10016-identification-quality-experiment-update

3 Likes

Interesting you bring that up. Now that there are museum specimen lists online, with collection locality, errors of this kind are also becoming easier to find. I was looking at Cyclanthaceae to determine if any occurred in the Dominican Republic, and I did indeed find a single record. But the name of the locality didn’t look right for a Spanish-speaking country, and the known range of the taxon included the Lesser Antilles, but not the Greater Antilles. Playing a hunch, I looked for a place with a name like that on Google Maps, and sure enough, it was in Dominica, not the Dominican Republic. I then contacted the curators of the specimen list with my findings. I can’t remember what museum it was now, so I have not been able to follow up to see if they corrected it.
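
As an aside, this kind of mismatch is exactly what becomes easy to screen for once collection data are online. A minimal sketch of the idea, with an invented record and a toy gazetteer standing in for a real lookup service or a point-in-polygon test on the coordinates:

```python
# Flag specimen records whose locality name points to a different country than
# the one recorded. The record and the tiny "gazetteer" below are invented for
# illustration; a real check would use a full gazetteer service or a
# point-in-polygon test on the coordinates.

records = [
    {"taxon": "Cyclanthaceae sp.", "country": "Dominican Republic",
     "locality": "Morne Trois Pitons"},  # hypothetical French-sounding locality
]

gazetteer = {
    "Morne Trois Pitons": "Dominica",  # hypothetical lookup: locality -> country
}

for rec in records:
    expected = gazetteer.get(rec["locality"])
    if expected and expected != rec["country"]:
        print(f'Possible error: "{rec["locality"]}" is in {expected}, '
              f'but the record says {rec["country"]}.')
```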

4 Likes

and this (I think based on that):
https://forum.inaturalist.org/t/identification-quality-on-inaturalist/7507

4 Likes

I agree with this completely. I have worked with a lot of scientifically rigorous data that also had plenty of issues. How we choose to use data, and the assumptions we make, are an important part of using any data.

It is also important to point out that sometimes you just have to use data that you know has issues because, like it or not, it is the best available data. You just need to be clear about your assumptions and identify those issues and risks plainly. I go by the view that folks can disagree with the data used if they like, but then they need to step up and identify better data. If they cannot, the objection is moot, because you cannot always wait for perfect data to make decisions.

6 Likes

That paper itself appears to be deeply flawed. As noted by Rod Page, they count synonyms and specimens undetermined beyond genus as “incorrectly named”, don’t state clearly what “wrong” means beyond that, and most fundamentally, don’t provide the data so anyone else can check.

They also cite the revision of Aframomum that is the source of the correct names as being from 2014, when in fact it wasn’t published until three years after this paper, in 2018. Even then it was published in a small-run print-only format, so many herbaria probably still have not updated.

The paper provides zero evidence of widespread misidentification in museum collections. It certainly does happen, but as someone who works at one, I can say it’s nowhere near the level it is on iNat. One of the advantages of this and similar sites is that they can maintain consistent nomenclature, which museums have a hard time doing. But having tree heliotrope listed under the name Tournefortia argentea instead of Heliotropium arborea is not “mistaken identity”.

2 Likes

Thanks! The discussion comments under Rod Page’s blogpost are interesting too. Still looking for an apples-to-apples comparison to some of the ID quality metrics that kueda posted back in 2019…

I saw some data from iNat that said the IDs were decent (70%?). I looked for it after reading this, but I could not find it. I remember using the link to counter someone who said iNat data were poor.

2 Likes

I’m not sure how you can do an apples-to-apples comparison. One concern is how you classify digital-platform observations that may or may not be wrong, i.e. those whose identity can’t conclusively be confirmed.

I’ve seen at least one study where all of those were counted as wrongly identified, and the conclusion was thus that ID quality was poorer on digital platforms. I remember being provoked by that study claiming a high error rate for birds on iNat, one of the easiest groups to identify. But likewise I can’t find that thread here.

2 Likes

In the link posted above by @kiwifergus, kueda states:

“accuracy varies considerably by taxon, from 91% accurate in birds to 65% accurate in insects”.

I’m making the latter part bold, as there are 90,000 or so species of insect in North America but only 2,000 or so bird species, so arguably the 65% accuracy is the more relevant end of the benchmark. This is in a North American context too, I think(?), so accuracy is likely lower in most other locations.

Meanwhile I’ve seen the museum comment raised by @fffffffff and @dianastuder repeated many times elsewhere - but as far as I saw when I last looked into this, it just seemed to stem from an anecdotal comment/supposition… it’s not connected to any actual figures. In the link posted above by @tiwane, the only mention of museum quality seems to be an offhand comment by TonyRebelo.

I struggle to believe any respectable museum insect collection would have a comparable 65% accuracy.

Not that I think iNat is doing a bad job! It’s clear it’s come on in leaps and bounds in UK obs this last year. But there does seem to be a bit of an echo chamber around some of these stats and statements on the forum… which is problematic.

4 Likes

I highly doubt birds have a high error rate; there are ten people checking each bird observation.

@sbushes well, links were posted before; personally I’m not good at saving them, though I thought it was all at least about RG observations, and there are far fewer mistakes in those. If we could eliminate blind agreeing, it would be up to 90% true IDs (complex groups will always lead to mistakes, plus most iNat observations don’t have a specimen, unlike museums).

3 Likes

The Kueda link stats are with regard to RG data.

I’m sure I dug through the links to the museum/herbarium accuracy comments previously… it just ended up at a single anecdote that has been regurgitated over and over since then in a way similar to this thread. I don’t believe there was ever a solid source, though it would be great to see one / be proved wrong.

The idea that iNaturalist accuracy will compare to museum accuracy seems conceptually doomed from the outset to me. At least in the UK, something like the Natural History Museum has the largest collection - with the bulk of our type specimens and the bulk of our taxonomists. These are literally the people making the keys and setting out the information we use to make IDs here! It seems really counterintuitive to think that iNaturalist accuracy, in its current design, could be equivalent to museum collection accuracy.
In taxa where we have experts such as those from the NHM engaged on iNaturalist - like Tachinidae - we have similar accuracy, sure. In taxa like millipedes, where we have almost nobody active, we seem to have very little accuracy. It’s just inevitably going to be patchier.

2 Likes

If you want a weighted average, I think it would be better to base it on the ratio of bird to insect observations rather than the number of species on the continent. That would give you a measure of accuracy on iNaturalist, rather than a measure of how easily bird photos are identified compared to insects, since the insects photographed will be biased towards the big, eye-catching ones, whereas most of the 90,000 species will be inconspicuous flies, parasitic wasps and beetles that are unidentifiable without photos taken down a microscope.
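
To make the difference concrete, here is a rough back-of-envelope sketch. The 91%/65% accuracy figures are the ones quoted above from kueda’s experiment and the species counts are those mentioned in this thread, but the observation counts are purely hypothetical placeholders, not real iNat totals:

```python
# Rough comparison of the two weighting schemes being discussed. The 91%/65%
# accuracy figures are quoted from kueda's experiment; the species counts are
# those mentioned in this thread; the observation counts are made-up
# placeholders, not real iNat totals.

bird_acc, insect_acc = 0.91, 0.65

# Species-based weighting (the earlier post's framing)
bird_spp, insect_spp = 2_000, 90_000
species_weighted = (bird_acc * bird_spp + insect_acc * insect_spp) / (bird_spp + insect_spp)

# Observation-based weighting (this post's suggestion), with hypothetical counts
bird_obs, insect_obs = 4_000_000, 5_000_000
obs_weighted = (bird_acc * bird_obs + insect_acc * insect_obs) / (bird_obs + insect_obs)

print(f"species-weighted accuracy:     {species_weighted:.2f}")  # ~0.66, dominated by insect richness
print(f"observation-weighted accuracy: {obs_weighted:.2f}")      # ~0.77, reflects what people actually post
```

Under those made-up but plausible observation counts, the observation-weighted figure lands well above the species-weighted one, which is the point being made here.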

3 Likes

That’s exactly the point. Why should we accept any part of the study as valid when it includes demonstrably and easily disproven stats, like claiming a 10 percent error rate on birds?
A study of this type is only valid if the observations used in it are weighted on a similar basis to the dataset as a whole.

The following are the insects used in the study:

Poanes - a notoriously difficult-to-separate group of dull brown and orange butterflies
Agraulis vanillae - a relatively distinctive butterfly
Disholcaspis cinerosa - a gall wasp, most often submitted as egg cases and easy to get wrong
Aquarius - water strider genus
Belostomatidae - giant water bug family
Enithares - backswimmer genus
Lethocerus - water bug genus
Corixidae - water boatman family
Laccotrephes - water scorpion genus
Lethocerus griseus - water bug species

So the dataset consists of a notoriously difficult butterfly genus, one distinctive butterfly, a gall wasp and multiple aquatic insects.

That’s not representative of the observations on the site in any way.

There are currently just over 5 million research grade records of insects in North America. 25% of those are butterflies and 12% are Odonata; these are small, well-studied, popular groups. Another 25% are moths, which, while larger in species count, are also well studied and popular. So over 60% of the records come from those three groups alone, where there is virtually no chance of a 35% error rate.

All this study ‘proves’ is that one or more aquatic insect experts have questions about the accuracy of a group of taxa that represents a small minority of the insects on the site. There is no way in the world it is statistically robust enough to claim a 35% error rate on insects, or to support any of the other statistical claims it makes.
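
For what it’s worth, a quick back-of-envelope check of that claim, using the record shares quoted above and an assumed, purely illustrative 5% error rate for the three popular groups:

```python
# Back-of-envelope check of the claim above. The record shares are the figures
# quoted in the post (butterflies + Odonata + moths ~ 62% of NA insect RG
# records); the 5% error rate assumed for those popular groups is illustrative only.

popular_share = 0.25 + 0.12 + 0.25   # ~0.62 of insect RG records
popular_error = 0.05                 # assumed low error in well-watched groups
claimed_overall = 0.35               # the disputed insect-wide error rate

rest_share = 1 - popular_share
required_rest_error = (claimed_overall - popular_share * popular_error) / rest_share
print(f"The remaining {rest_share:.0%} of records would need ~{required_rest_error:.0%} error")
# -> roughly 84%, which is hard to credit
```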

6 Likes

Careful IDers were tested, since most of the bad data comes from new, short-lived users (upload 20 pics with AI suggestions and delete the app). If we look at experts, their ID accuracy will be the same as museums’, as the same people work with both datasets. And most weird IDs don’t get to RG; we see them a lot, but we see them because we separate them from the “normal” ones and remember them better than the thousands upon thousands of “good” observations.

4 Likes