I was in Costa Rica last year and was recently uploading some of the photos when I noticed that my observation of a Potoo was in no way unique: there are 12 observations of the exact same individual in the exact same spot, uploaded by 12 different people within a period of 3 weeks.
I am just wondering whether something should be done about this, because it feels like it skews the statistics for Potoo observations: these 12 observers (myself included) were told the location by others rather than finding the bird themselves…
iNat observations record encounters between an observer and an organism, so each of these is fine. But yes, because of this iNat isn’t great for estimating abundance, since multiple observations can be separate records of the same organism. It’s just how iNat data are structured.
A similar thing happened with a potoo we were shown by a guide in Panama last year, although there are only 3 observers.
Or, this rather photogenic (but dead) bristlecone pine (most, but not all, observations in the square are of the same specimen). I photographed it this fall, but chose to post another less famed (and still living) specimen instead.
I’m sure there were more observations there of these few specimens. In particular, I helped identify the Senna bicapsularis/pendula with the red grid in the background 100+ times. It seems that many of those observations were later deleted, so the effort of identifying many observations of that specimen (or those specimens) ended up being largely useless.
This is something researchers generally know they need to account for. It’s a well-known phenomenon that also affects other platforms (on eBird, for example, the frequency percentages can make it look like Lazuli Buntings are not that unusual in southeastern Ohio in January).
I believe that with some math you can still wring more use out of the data, but also, not all data sets are equally appropriate for all types of research, and that’s okay.
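For what it’s worth, here’s a minimal sketch of the kind of "math" I mean (Python, made-up coordinates and a made-up species list, not any official iNat or GBIF workflow): before counting, collapse same-taxon observations that fall within a small radius of each other into one presumed individual.

```python
# A rough sketch of one way to reduce double-counting: same-taxon
# observations within a small radius are treated as the same individual.
# The records below are hypothetical, for illustration only.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def collapse_observations(observations, radius_m=50):
    """Greedy clustering: an observation joins an existing cluster if it is
    the same taxon and within radius_m of the cluster's first record;
    otherwise it starts a new cluster (one presumed individual)."""
    clusters = []
    for obs in observations:
        for c in clusters:
            if (c["taxon"] == obs["taxon"]
                    and haversine_m(c["lat"], c["lon"], obs["lat"], obs["lon"]) <= radius_m):
                c["count"] += 1
                break
        else:
            clusters.append({"taxon": obs["taxon"], "lat": obs["lat"],
                             "lon": obs["lon"], "count": 1})
    return clusters

# Twelve hypothetical observations of the same potoo at the same roost,
# plus one observation of another potoo some 20+ km away.
obs = [{"taxon": "Nyctibius griseus", "lat": 10.30001, "lon": -84.80002} for _ in range(12)]
obs.append({"taxon": "Nyctibius griseus", "lat": 10.45000, "lon": -84.60000})

for c in collapse_observations(obs):
    print(f'{c["taxon"]}: {c["count"]} observation(s) -> 1 presumed individual')
# Counting clusters instead of raw records gives 2 presumed individuals
# rather than 13 observations.
```

Obviously real analyses use more careful methods (and would also consider dates, observers, and coordinate accuracy), but the basic idea is the same: count individuals, not uploads.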