> Having an accuracy radius on your location can be important to some data users

is this some sort of actual regulation or law? or is this just some arbitrary personal rule or bureaucratic prioritization thing, such as:

so if you have multiple occurrence records with no accuracy value recorded from roughly the same location, you’re saying that’s of absolutely no value? but if someone adds an arbitrary accuracy value to two of those records, then, magically, it’s valuable?

if i’m some sort of rogue activist, i could just go around adding multiple fake records with accuracy values, and i could stop development? nobody’s going to actually go survey / ground-truth the land to check data posted by randos on the internet?

this is totally not true. bulk-adding accuracy values just means someone added an accuracy value. it doesn’t say anything about whether the observer assessed their location or accuracy, or whether they just added the same arbitrary value to all their observations.
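
to make that concrete, here’s a toy sketch (the column name is borrowed from iNaturalist’s CSV export, and the 30m value is arbitrary, which is exactly the point):

```python
import pandas as pd

# two hypothetical records where the observers recorded no accuracy at all
obs = pd.DataFrame({"id": [101, 102], "positional_accuracy": [None, None]})

# a bulk edit that "adds accuracy" without anyone reassessing a single location
obs["positional_accuracy"] = obs["positional_accuracy"].fillna(30)

print(obs)
# both records now pass an "accuracy <= 50m" filter, but the underlying
# locations are exactly as (un)assessed as they were before the edit
```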

here’s a real example. there’s a pond at a park in my area that has some Nymphoides plants. those plants are isolated to a single pond that is labeled “Arboretum Lake” on Google Maps and “Meadow Pond” on OSM.

below are all the verifiable observations of Nymphoides in Memorial Park. note that there are observations clustered at the pond, but also at the Arboretum headquarters building, and a few others scattered throughout the park.

a lot of people mention that scientists who care about fine-scale accuracy will filter for observations with accuracy <= 50m. so let’s do that. note that this filters out a few of the outliers, but you still have a cluster at Arboretum headquarters, and there are still a few one-offs.
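
here’s roughly what that filter looks like against a CSV export of these observations (the filename is a placeholder, and i’m assuming the export’s positional_accuracy column, in meters, blank when nothing was recorded):

```python
import pandas as pd

# hypothetical CSV export of the Memorial Park Nymphoides observations
obs = pd.read_csv("nymphoides_memorial_park.csv")

# the usual "fine-scale" filter
fine = obs[obs["positional_accuracy"] <= 50]
print(f"{len(obs)} observations total, {len(fine)} pass the <=50m filter")

# the filter drops rows, but it can't tell a measured 30m from a typed-in 30m
```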

when i look at these outliers and compare them to other observations their observers have made, half (2) appear to be cases where people added an arbitrary accuracy value, and the other half (2) appear to be cases where iOS reported a very small accuracy value (claiming high precision) but got the location very wrong.

now let’s look at our observations that had no accuracy recorded. there’s one at Arboretum headquarters, and there’s another one-off. did you really improve the quality of the data by excluding these observations?
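
note that this exclusion happens silently inside the same kind of filter, without anyone ever looking at the records being dropped:

```python
import pandas as pd

# same hypothetical export as above
obs = pd.read_csv("nymphoides_memorial_park.csv")

# rows where positional_accuracy is blank
no_acc = obs[obs["positional_accuracy"].isna()]
print(f"{len(no_acc)} observations have no accuracy recorded")

# an accuracy filter drops all of these automatically, since NaN <= 50 is
# False; nothing in that filter ever looked at where these records fall
```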

if you don’t like that example, here’s another: a patch of Alligator Flag isolated to a different pond:

pick any example of an organism that you know exists at a particular location, and do the same examination of the observations in iNaturalist. does excluding observations with no accuracy help in any of these cases?
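
if you want to do that examination programmatically, here’s a rough sketch using what i believe are the iNaturalist v1 API’s acc and acc_below filters (the taxon and place IDs below are placeholders; swap in your own):

```python
import requests

BASE = "https://api.inaturalist.org/v1/observations"

def count_obs(**params):
    """Return total_results for a filtered query (per_page=0 asks for counts only)."""
    params = {"verifiable": "true", "per_page": 0, **params}
    return requests.get(BASE, params=params, timeout=30).json()["total_results"]

# placeholder IDs; look up the taxon and place you actually want to examine
TAXON, PLACE = 50913, 12345

print("all verifiable:     ", count_obs(taxon_id=TAXON, place_id=PLACE))
print("accuracy recorded:  ", count_obs(taxon_id=TAXON, place_id=PLACE, acc="true"))
print("accuracy <= 50m:    ", count_obs(taxon_id=TAXON, place_id=PLACE, acc_below=50))
print("no accuracy at all: ", count_obs(taxon_id=TAXON, place_id=PLACE, acc="false"))
```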

here’s another example i’ve provided in the past with a different sort of visualization: https://forum.inaturalist.org/t/location-accuracy-too-easily-bypassed/18547/31.

you would think that the people who are most concerned about accuracy values would also be the ones most concerned about having an understanding of what those accuracy values actually mean.

if you’re just going to wave off inconsistencies in locations / accuracies, why not just wave off the lack of accuracy values, too? what’s the difference?

to me, a better analogy to the inconsistencies in the locations / accuracies would be allowing folks to identify their observations using any taxonomy they want without noting which framework they’re using.

for example, i could have 3 different people observe the same plant. one of them could call it Hibiscus lasiocarpos, another could call it Hibiscus moscheutos ssp. lasiocarpos, and the last one could call it Hibiscus moscheutos, and they would all be right according to their view of how things should be classified. the issue isn’t that anyone is wrong. the issue is that they’re inconsistent. would you use that data as is without dealing with those inconsistencies?
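
dealing with those inconsistencies usually means normalizing everything to one framework before analysis, something like this sketch (the mapping is my own judgment call, not an authoritative synonymy):

```python
# map each reported name to one canonical name under a single chosen framework.
# collapsing the broader Hibiscus moscheutos down to the subspecies is only
# safe here because we know all three people observed the same plant
CANONICAL = {
    "Hibiscus lasiocarpos": "Hibiscus moscheutos ssp. lasiocarpos",
    "Hibiscus moscheutos": "Hibiscus moscheutos ssp. lasiocarpos",
    "Hibiscus moscheutos ssp. lasiocarpos": "Hibiscus moscheutos ssp. lasiocarpos",
}

reported = [
    "Hibiscus lasiocarpos",                  # observer 1
    "Hibiscus moscheutos ssp. lasiocarpos",  # observer 2
    "Hibiscus moscheutos",                   # observer 3
]

normalized = [CANONICAL.get(name, name) for name in reported]
print(normalized)  # all three records now count as the same taxon
```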
