The problem with blindly using biodiversity databases

That’s exactly the point. Why should we accept any part of the study as valid when it includes demonstrably and easily disproven statistics, such as the claim of a 10 percent error rate on birds?
A study of this type is only valid if the observations used in it are weighted to reflect the composition of the dataset as a whole.
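To make that concrete, here is a minimal sketch in Python, using entirely made-up per-group error rates and dataset shares (not real iNaturalist figures), of how an estimate built from a handful of hand-picked taxa diverges from one weighted by each group’s share of the records:

```python
# Made-up per-group ID error rates and dataset shares, purely to
# illustrate the weighting argument; not real iNaturalist figures.
error_rate = {
    "butterflies": 0.03,
    "moths": 0.05,
    "odonata": 0.03,
    "aquatic bugs": 0.35,
}
dataset_share = {
    "butterflies": 0.25,
    "moths": 0.25,
    "odonata": 0.12,
    "aquatic bugs": 0.05,
}  # remaining records omitted for simplicity

# Unweighted average over whichever taxa happened to be sampled
# (roughly what an unrepresentative taxon list produces).
sampled = ["aquatic bugs", "aquatic bugs", "aquatic bugs", "butterflies"]
unweighted = sum(error_rate[g] for g in sampled) / len(sampled)

# Average weighted by each group's share of the whole dataset.
weighted = sum(error_rate[g] * dataset_share[g] for g in dataset_share)
weighted /= sum(dataset_share.values())

print(f"unweighted estimate: {unweighted:.1%}")  # ~27%, driven by aquatic bugs
print(f"weighted estimate:   {weighted:.1%}")    # ~6%, much lower
```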

The following are the insects used in the study:

Poanes - a notoriously difficult-to-separate genus of dull brown and orange butterflies
Agraulis vanillae - a relatively distinctive butterfly
Disholcaspis cinerosa - a gall wasp, most often submitted as egg cases and easy to get wrong
Aquarius - a water strider genus
Belostomatidae - giant water bugs (a family)
Enithares - a backswimmer genus
Lethocerus - a giant water bug genus
Corixidae - the water boatman family
Laccotrephes - a water scorpion genus
Lethocerus griseus - a giant water bug species

So you have a notoriously difficult butterfly genus, a distinctive butterfly species, a gall wasp, and multiple aquatic insect taxa as the dataset.

That’s not representative of the observations on the site in any way.

There are currently just over 5 million research-grade records of insects in North America. Butterflies account for 25% of those, and Odonata for another 12%; both are small, well-studied, popular groups. A further 25% are moths, which, while larger in species count, are also well studied and popular. So over 60% of the records come from those three groups alone, where there is virtually no chance of a 35% error rate.
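As a quick back-of-the-envelope check, using only the round numbers quoted above (the roughly 5 million total and the 25/12/25 percent shares, not independently verified):

```python
# Combined share of insect records from the three popular, well-studied
# groups, using the round numbers quoted above.
total_records = 5_000_000
shares = {"butterflies": 0.25, "odonata": 0.12, "moths": 0.25}

combined_share = sum(shares.values())              # 0.62
combined_records = int(total_records * combined_share)

print(f"combined share:   {combined_share:.0%}")   # 62%
print(f"combined records: ~{combined_records:,}")  # ~3,100,000
```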

All this study ‘proves’ is that one or more aquatic insect experts have questions about the accuracy of a group of taxa that represents a small minority of the overall insect records on the site. There is no way in the world it is statistically robust enough to support a claim of a 35% error rate on insects, or any of the other statistical claims it makes.
