Make users more aware of the accuracy field when uploading observations through the web interface

Thank you @pisum, @krancmm, @tonyrebelo and @egordon88 for your insights! :clap:

@pisum I wonder how you discovered that unusual clusters were in fact cell phone towers/center points of Google Maps places?

What about if you’re using a GPS-equipped camera and you don’t know the exact accuracy? I think it’s usually about 10 m, but is it OK to estimate?

We all have to estimate it; just make sure the circle covers the whole area the observation was probably made in. It doesn’t have to be 10 metres, especially if you can see or guess that the dot is in the correct spot.


I suggest:

Divide location into two fields, one for position, and one for accuracy. :round_pushpin::straight_ruler:
This will make it very obvious that Accuracy should be recorded.

When setting the crosshairs on the map, have it fill out both position and accuracy (just as it currently does). This should make it easier to tell that the size of the crosshairs is used to provide the accuracy.

Redesign the crosshairs to better convey that you are providing an area :goal_net: rather than a point :dart:.
E.g. highlight the inside of the crosshairs or grey-out the rest of the map.

Also, it would be more accurate to call it “precision” rather than “accuracy”.


In my local database, I usually rely on the number of digits in coordinates:
41.12345 = 1 m
41.1234 = 10 m
41.123 = 100 m
41.12 = 1 km
When the fields are filled in manually, the iNat server could count the digits before converting them into a number and save the estimated accuracy/precision automatically.
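As a rough sketch of that digit-counting idea (assuming the server sees the coordinate as a raw string before parsing it, and using ~111 km per degree of latitude):

```python
def estimate_precision_m(coord_str: str) -> float:
    """Rough positional precision (metres) implied by the number of
    decimal places in a latitude string.

    One degree of latitude is about 111 km, and each extra decimal
    place narrows the implied precision by a factor of ten.
    """
    _, _, decimals = coord_str.partition(".")
    return 111_000 / (10 ** len(decimals))

print(estimate_precision_m("41.123"))    # 3 decimal places -> 111.0 m
print(estimate_precision_m("41.12345"))  # 5 decimal places -> 1.11 m
```

(Degrees of longitude shrink away from the equator, so this is only an upper-bound style estimate for the east–west component.)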


That’s the way it should work, but Google Maps carries it out to 15 decimal places, and they are not always accurate to even 5 decimal places.


Bumping this request. I like options 2 and 4 in the original request best. To deal with @tiwane’s concerns, neither would need to produce a pop-up or a warning that needs to be dismissed. They would simply be made more visible without additional clicks, for the interested user who, like @colinpurrington, might have been unaware that this field even existed.

That would be a good idea, although it would be important not to implement it for coordinates created in other ways. It would be great if iNat would record only the number of digits in coordinates that make sense in relation to the level of zoom when someone chooses a location on the map. It appears to record to 14 decimal places when zoomed out, and 15 when zoomed in. I’m finding it hard to think of a case where you’d need to record more than six decimal places (roughly 10 cm precision).

For smartphones, maybe we’re stuck with whatever they add, although iNaturalist could choose to truncate these values. Apart from silly precision with coordinates, I sometimes see bizarre “accuracy” values with absurd levels of precision on an iPhone (e.g. an “accuracy” value of 42.993842938461948 metres, or similar, which doesn’t make any sense).
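For illustration, truncating server-side is trivial; the function name here is made up, and six decimal places of latitude is about 0.11 m at the equator, already finer than any consumer GPS:

```python
def tidy(value: float, decimals: int = 6) -> float:
    # Six decimal places of latitude is ~0.11 m at the equator, so any
    # extra digits carry no real information.
    return round(value, decimals)

lat = 41.123456789012345   # 15 decimal places from a zoomed-in map click
acc = 42.993842938461948   # metres, with absurd precision

print(tidy(lat))      # 41.123457
print(tidy(acc, 1))   # 43.0
```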


For some data uses, people certainly will toss out records that don’t have an accuracy value. Depends on how important the accuracy value is for a particular use and how many records are available. If accuracy is a high priority and there are a lot of records, it makes sense to toss 'em.

But, yeah, it’s certainly not a need, nor important across all use cases.


A related suggestion, which I think is close enough to the original intent to include here; I’ll make a separate feature request if that’d be better.

OPTION 5: Add a location accuracy filter to the search options at https://www.inaturalist.org/observations/[user]

The specific context that prompts this: I have no idea why, but my iPhone has been having some intermittent GPS data issues. For instance, in some map applications it’ll stop updating for a while, leaving “me” at some location a couple miles away with a big error circle. When using the iNaturalist app, this manifests as something like 1-3% of the observations over the last few months being 1-5 miles off with accuracy values around 400-1000 m. It’d be easier to catch and correct those errors if I could search my observations for everything with an accuracy over 200 m. As it is, I’ve just been visually scanning recent observations for locations that look “off”. Not a great workflow.
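In the meantime, the iNaturalist API (v1) does appear to expose `acc_above`/`acc_below` filters even though the website search UI doesn’t, so a workaround might look like this (a sketch only; worth checking the current API docs before relying on it):

```python
from urllib.parse import urlencode

def accuracy_query(user: str, metres: int) -> str:
    """Build an API URL listing one user's observations whose
    positional accuracy exceeds a threshold in metres."""
    params = urlencode({"user_id": user, "acc_above": metres, "per_page": 200})
    return f"https://api.inaturalist.org/v1/observations?{params}"

print(accuracy_query("my_login", 200))
```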

I’m also wondering, as my mind wanders an adjacent tangent, if it might make sense to have a little automated QA / QC thingy that pings when observations imply unrealistic travel speeds. If the observation at 10:51 is five miles from the observation at 10:50, something is probably wrong with the data.
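That speed check is simple to sketch: great-circle distance between consecutive observations divided by the time gap, flagged when it implies an unrealistic speed. The 120 km/h threshold below is an arbitrary placeholder:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def implausible(obs_a, obs_b, max_kmh=120):
    """Flag a pair of (lat, lon, epoch_seconds) observations whose
    implied travel speed exceeds max_kmh."""
    dist_km = haversine_km(obs_a[0], obs_a[1], obs_b[0], obs_b[1])
    hours = abs(obs_b[2] - obs_a[2]) / 3600 or 1e-9  # avoid divide-by-zero
    return dist_km / hours > max_kmh

# ~8 km apart, one minute apart -> ~480 km/h: flag it.
a = (38.000, -122.000, 0)
b = (38.072, -122.000, 60)
print(implausible(a, b))  # True
```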


If accuracy is really a high priority, you should use data only from trusted, reliable sources, where you know exactly how the observer is collecting their location data and are comfortable with their methods. Otherwise, you really should do some sort of adjustment or correction anyway, and it wouldn’t much matter whether the accuracy value was present or not for most applications.

It’s just a series of context-specific cost-benefit analyses. All data comes with error. Trusting data just means you’re pretending the error is negligible. :-)

For what it’s worth, the context that was coming to mind in particular was habitat modelling using a mix of herbarium specimen locations, rare plant survey results, and a handful of iNaturalist records. The ecological variables that go into the habitat model are mostly rasters with 30 m pixels. Some of the rasters show a lot of variation at fine spatial scales, some don’t. In any case, sampling the right pixels does matter, and you really want accuracy values less than 15ish m. In one of the models I was involved in recently, we did kick out all the herbarium records without accuracy values. Given the nature of the data (botanists > 10 years ago were often really sloppy with GIS data, doing annoying things like GPSing the car and then wandering off on the hillside for a few hours), we thought absence of an accuracy value was good evidence that there wasn’t much care going into the location data. And the set of recent plant survey locations was, well, not large in any real sense but on the larger side for a rare plant habitat model.

In this particular context I think the relative trustworthiness of the location data was rare plant survey > iNaturalist > herbarium records.


A lot of people do (did?) that with manually entered data, too. Well, maybe not to 15 places because people get tired of writing numbers before then, but to more places than could possibly be meaningful, in any case. Bugged me when I was managing herbarium specimen data.