Downloading a CSV of all observations of a species with Python

if you’re simply visualizing things on a map, it would be really inefficient to download all the observations first. it’s more efficient to just request the observation map tiles, plus the associated UTF grids if you need to query individual cells. (this is especially true if you’re trying to map millions of observations.)

then you can quickly map whatever you like. for example: https://jumear.github.io/stirfry/iNat_map.html?taxon_id=459050&verifiable=true&place_id=97394
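
for a python flavor of the same idea, here’s a rough, untested sketch using folium (Leaflet for python). the /v1/points, /v1/grid, and /v1/heatmap tile endpoints are part of API v1; the map center/zoom and the folium usage are just placeholders to adapt:

```python
# rough sketch: point a Leaflet map (via folium) at iNaturalist's observation tiles
# instead of downloading every observation. assumes `pip install folium`.
import folium

params = "taxon_id=459050&verifiable=true&place_id=97394"

# center/zoom are placeholders; set them to wherever your place actually is
m = folium.Map(location=[31.0, -100.0], zoom_start=5)

# API v1 serves observation map tiles: /v1/points draws individual observations,
# while /v1/grid and /v1/heatmap serve pre-aggregated tiles
folium.TileLayer(
    tiles=f"https://api.inaturalist.org/v1/points/{{z}}/{{x}}/{{y}}.png?{params}",
    attr="iNaturalist",
    name="observations",
    overlay=True,
).add_to(m)

folium.LayerControl().add_to(m)
m.save("inat_map.html")
```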

it’s also possible to map using just the UTF grids, though that’s a little harder to code. example: https://jumear.github.io/stirfry/iNat_UTFgrid_based_density_map_for_Leaflet.html?defaultstyle=gradient&place_id=97394&taxon_id=459050&scale_factor=5
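
if you want to read the UTF grids from python rather than JavaScript, the decoding follows the generic UTFGrid spec. a rough sketch (tile coordinates are placeholders, and the contents of each data record vary by endpoint, so i’m not assuming any particular fields):

```python
# rough sketch: fetch one UTFGrid tile from API v1 and decode it per the UTFGrid spec
import requests

z, x, y = 5, 7, 13  # placeholder tile coordinates; pick ones that cover your area
url = (
    f"https://api.inaturalist.org/v1/grid/{z}/{x}/{y}.grid.json"
    "?taxon_id=459050&place_id=97394"
)
tile = requests.get(url, timeout=30).json()

def decode(ch):
    """map a UTFGrid character back to an index into tile['keys'] (per the spec)"""
    code = ord(ch)
    if code >= 93:
        code -= 1
    if code >= 35:
        code -= 1
    return code - 32

# walk the grid and print whatever aggregated data each occupied cell carries
for row_i, row in enumerate(tile["grid"]):
    for col_i, ch in enumerate(row):
        key = tile["keys"][decode(ch)]
        if key:  # an empty key means no observations in that cell
            print(row_i, col_i, tile["data"].get(key))
```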

generally, if you don’t need observation-level detail, it’s more efficient to get aggregated data instead.
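
API v1 already has aggregate endpoints for a lot of this, e.g. observations/histogram, observations/species_counts, and observations/observers. a quick sketch using the same taxon/place as above:

```python
# rough sketch: pull pre-aggregated counts instead of raw observations
import requests

base = "https://api.inaturalist.org/v1/observations"
params = {"taxon_id": 459050, "place_id": 97394, "verifiable": "true"}

# per_page=0 returns just the match count, with no observation records
total = requests.get(base, params={**params, "per_page": 0}, timeout=30).json()
print("total observations:", total["total_results"])

# counts per calendar month, e.g. to look at seasonality
hist = requests.get(
    f"{base}/histogram",
    params={**params, "date_field": "observed", "interval": "month_of_year"},
    timeout=30,
).json()
print(hist["results"])
```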

i also don’t understand why you necessarily need CSV output if you do decide you must get observation-level details. the old API can return observations in CSV format (ex. https://www.inaturalist.org/observations.csv?place_id=97394&taxon_id=459050), but why is that necessary, as opposed to getting the data as JSON and writing out only the fields you need yourself?
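
if you do end up needing observation-level records, you can page through the JSON endpoint and write out just the columns you care about. a rough sketch (the id_above paging is a common way to avoid the page-depth cap; the chosen fields are just examples):

```python
# rough sketch: page through /v1/observations as JSON, then write a small CSV ourselves
import csv
import requests

base = "https://api.inaturalist.org/v1/observations"
params = {"taxon_id": 459050, "place_id": 97394, "per_page": 200,
          "order_by": "id", "order": "asc"}

rows, last_id = [], 0
while True:
    page = requests.get(base, params={**params, "id_above": last_id}, timeout=30).json()
    results = page["results"]
    if not results:
        break
    for obs in results:
        lat, lon = (obs.get("location") or ",").split(",")  # "lat,lon" string in API v1
        rows.append({"id": obs["id"], "observed_on": obs.get("observed_on"),
                     "latitude": lat, "longitude": lon})
    last_id = results[-1]["id"]  # continue from the highest id seen so far

with open("observations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "observed_on", "latitude", "longitude"])
    writer.writeheader()
    writer.writerows(rows)
```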
